15 research outputs found

    Hardware-accelerator aware VNF-chain recovery

    Get PDF
    Hardware accelerators in Network Function Virtualization (NFV) environments have helped telecommunications companies (telcos) reduce their expenditure by offloading compute-intensive VNFs to dedicated hardware. To fully realize these benefits, VNF-chain recovery models need to be adapted. In this paper, we present an ILP model for optimizing the prioritized recovery of VNF-chains in heterogeneous NFV environments following node failures. We also propose an accelerator-aware heuristic for solving large prioritized VNF-chain recovery problems in reasonable time. Evaluation results show that the heuristic matches the ILP in restoring high- and medium-priority VNF-chains, with only a small penalty for low-priority VNF-chains.
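    The prioritized recovery idea above can be illustrated with a small greedy sketch: failed chains are re-placed in priority order on nodes that still have enough CPU and accelerator capacity. The field names, capacities, and priority encoding below are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical greedy sketch of prioritized, accelerator-aware chain recovery.
# Chain fields, node capacities, and the priority encoding (0 = highest)
# are illustrative assumptions, not the paper's ILP model.

def recover_chains(chains, nodes):
    """Greedily re-place failed chains, highest priority first.

    chains: list of dicts with 'id', 'priority' (0 = highest),
            'cpu' demand and 'acc' (accelerator) demand.
    nodes:  dict node_id -> {'cpu': free CPU, 'acc': free accelerator units}.
    Returns mapping chain id -> node id (or None if unrecoverable).
    """
    placement = {}
    for chain in sorted(chains, key=lambda c: c["priority"]):
        target = None
        for node_id, cap in nodes.items():
            if cap["cpu"] >= chain["cpu"] and cap["acc"] >= chain["acc"]:
                target = node_id
                break
        if target is not None:
            nodes[target]["cpu"] -= chain["cpu"]
            nodes[target]["acc"] -= chain["acc"]
        placement[chain["id"]] = target
    return placement

chains = [
    {"id": "c-low", "priority": 2, "cpu": 4, "acc": 0},
    {"id": "c-high", "priority": 0, "cpu": 4, "acc": 1},
]
nodes = {"n1": {"cpu": 4, "acc": 1}}
print(recover_chains(chains, nodes))  # high-priority chain wins the node
```

    Sorting by priority before placing is what produces the behavior reported above: high-priority chains are restored first, so any capacity shortfall is absorbed by the low-priority chains.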

    Dynamic accelerator provisioning for SSH tunnels in NFV environments

    Get PDF
    In this demonstration, we present dynamic allocation of accelerator resources to SSH tunnels in an NFV environment. To accelerate a VNF, its compute-intensive operations are offloaded to hardware cores running on an FPGA. The CPU utilization of each VNF is continuously processed by a service management component, which dynamically decides the most suitable target for running the VNF's crypto-operations. We also demonstrate switching between non-accelerated and hardware-accelerated SSH tunnels, triggered by a change in the nature of the data traffic flowing through the tunnel, and indicate the throughput gains obtainable in such dynamic-switching contexts.

    Data-Driven Latency Probability Prediction for Wireless Networks: Focusing on Tail Probabilities

    Full text link
    With the emergence of new application areas, such as cyber-physical systems and human-in-the-loop applications, there is a need to guarantee a certain level of end-to-end network latency with extremely high reliability, e.g., 99.999%. While mechanisms specified under IEEE 802.1AS time-sensitive networking (TSN) can be used to achieve these requirements for switched Ethernet networks, implementing TSN mechanisms in wireless networks is challenging due to their stochastic nature. To conform a wireless link to a reliability level of 99.999%, the behavior of extremely rare outliers in the latency probability distribution, i.e., the tail of the distribution, must be analyzed and controlled. This work proposes predicting the tail of the latency distribution using state-of-the-art data-driven approaches, such as mixture density networks (MDN) and extreme value mixture models, to estimate the likelihood of rare latencies conditioned on the network parameters, which can be used to make more informed decisions in wireless transmission. Actual latency measurements from IEEE 802.11g (WiFi), a commercial private 5G network, and a software-defined 5G network are used to benchmark the proposed approaches and evaluate their sensitivity with respect to the tail probabilities. Comment: Submitted to the IEEE Global Communications (GLOBECOM) 2023 conference.
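    The extreme-value flavor of the approach above can be illustrated with a peaks-over-threshold sketch: fit a generalized Pareto distribution to latency exceedances over a high threshold and use it to estimate tail probabilities. The synthetic latency samples and the threshold choice are assumptions for illustration, not the paper's data or model.

```python
# Illustrative peaks-over-threshold sketch of tail-probability estimation,
# in the spirit of the extreme value mixture models mentioned above.
# The synthetic latencies and the 95th-percentile threshold are assumptions.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
latency_ms = rng.exponential(scale=2.0, size=20000)  # synthetic latency samples

u = np.quantile(latency_ms, 0.95)          # tail threshold
excesses = latency_ms[latency_ms > u] - u  # exceedances over the threshold
shape, loc, scale = genpareto.fit(excesses, floc=0.0)

def tail_prob(x):
    """Estimate P(latency > x) for x above the threshold u."""
    p_exceed_u = np.mean(latency_ms > u)
    return p_exceed_u * genpareto.sf(x - u, shape, loc=0.0, scale=scale)

print(tail_prob(u + 5.0))
```

    The key point is that the fitted tail lets one extrapolate to reliability levels (e.g., 99.999%) where raw empirical frequencies become too noisy to use directly.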

    Dynamic hardware-acceleration of VNFs in NFV environments

    Get PDF
    In this paper, we describe a scheme for dynamically provisioning hardware-accelerator resources to Virtual Network Functions (VNFs) in an NFV environment. The scheme involves collaboration between NFV components such as the service-specific manager (SSM) and element management systems (EMSs) for the management of accelerator resources. Accelerator resources are dynamically allocated to VNFs based on their resource-usage information. We present a performance comparison of non-accelerated and accelerated SSH-client VNFs. We also demonstrate switching of accelerator resources between concurrently running SSH tunnels, triggered by a change in the nature of the data traffic flowing through them.
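    A minimal sketch of such a utilization-driven decision, assuming a simple moving-average rule with hysteresis; the thresholds, window size, and class names are illustrative, not the SSM's actual policy.

```python
# Hypothetical threshold rule: decide per VNF whether crypto operations
# should run on the CPU or be offloaded to an FPGA core, based on a
# moving average of reported CPU utilization. All numbers are assumptions.
from collections import deque

class AcceleratorScheduler:
    def __init__(self, offload_at=80.0, release_at=30.0, window=5):
        self.offload_at = offload_at   # % CPU that triggers offloading
        self.release_at = release_at   # % CPU below which the FPGA is released
        self.samples = deque(maxlen=window)
        self.target = "cpu"

    def report(self, cpu_util):
        self.samples.append(cpu_util)
        avg = sum(self.samples) / len(self.samples)
        # Hysteresis avoids flapping between targets on noisy readings.
        if self.target == "cpu" and avg >= self.offload_at:
            self.target = "fpga"
        elif self.target == "fpga" and avg <= self.release_at:
            self.target = "cpu"
        return self.target

sched = AcceleratorScheduler()
for util in (20, 90, 95, 92, 88, 85):
    print(sched.report(util))
```

    The hysteresis gap between the two thresholds models the cost of switching: reconfiguring an FPGA core is expensive, so the decision should not oscillate on short utilization spikes.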

    VNF-AAPC: accelerator-aware VNF placement and chaining

    Get PDF
    In recent years, telecom operators have been migrating towards network architectures based on Network Function Virtualization (NFV) in order to reduce their high Capital Expenditure (CAPEX) and Operational Expenditure (OPEX). However, virtualization of some network functions is accompanied by a significant degradation of Virtual Network Function (VNF) performance in terms of throughput or energy consumption. To address these challenges, the use of hardware-accelerators, e.g., FPGAs and GPUs, to offload CPU-intensive operations from performance-critical VNFs has been proposed. Allocation of NFV infrastructure (NFVi) resources for VNF placement and chaining (VNF-PC) has recently been a major area of research, and a variety of resource-allocation models have been proposed to achieve various operator objectives, i.e., minimizing CAPEX, OPEX, latency, etc. However, the VNF-PC resource-allocation problem for the case where the NFVi incorporates hardware-accelerators remains unaddressed. Ignoring hardware-accelerators in the NFVi while allocating resources to VNF-chains can nullify the advantages that the accelerators provide. Therefore, accurate models and techniques for accelerator-aware VNF-PC (VNF-AAPC) are needed to achieve efficient utilization of all NFVi resources, including hardware-accelerators. This paper investigates the VNF-AAPC problem, i.e., how to allocate regular NFVi resources along with hardware-accelerators to VNF-chains in a cost-efficient manner. In particular, we propose two methods to tackle the VNF-AAPC problem. The first is based on Integer Linear Programming (ILP) and jointly optimizes VNF placement, chaining, and accelerator allocation while conforming to all NFVi constraints. The second is a heuristic-based method that addresses the scalability issue of the ILP approach by following a two-step algorithm. The experimental evaluations indicate that incorporating accelerator-awareness in VNF-PC strategies can help operators achieve additional cost savings through the efficient allocation of hardware-accelerator resources.
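    A hypothetical two-step placement sketch in the spirit of the heuristic described above: step 1 marks CPU-intensive VNFs for offloading, step 2 places them first-fit on nodes with sufficient CPU and FPGA capacity. All names, the threshold, and the assumed CPU relief from offloading are illustrative, not the paper's algorithm.

```python
# Illustrative two-step sketch: (1) decide which VNFs to accelerate,
# (2) place the chain first-fit under CPU and FPGA capacity constraints.
# Thresholds and the halving of CPU demand are assumptions.

def place_chain(chain, nodes, acc_threshold=8):
    # Step 1: decide acceleration -- offload VNFs whose CPU demand is high.
    for vnf in chain:
        vnf["accelerated"] = vnf["cpu"] >= acc_threshold
        if vnf["accelerated"]:
            vnf["cpu"] //= 2  # assumed CPU relief from offloading

    # Step 2: first-fit placement honouring CPU and accelerator capacity.
    placement = {}
    for vnf in chain:
        for node_id, cap in nodes.items():
            need_acc = 1 if vnf["accelerated"] else 0
            if cap["cpu"] >= vnf["cpu"] and cap["fpga"] >= need_acc:
                cap["cpu"] -= vnf["cpu"]
                cap["fpga"] -= need_acc
                placement[vnf["name"]] = node_id
                break
    return placement

chain = [{"name": "fw", "cpu": 4}, {"name": "ipsec", "cpu": 12}]
nodes = {"n1": {"cpu": 10, "fpga": 1}}
print(place_chain(chain, nodes))
```

    The example shows the coupling that makes the joint problem hard: accelerating the crypto-heavy VNF lowers its CPU demand enough that the whole chain fits on one node.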

    VNF-AAP: accelerator-aware Virtual Network Function placement

    Get PDF
    Network Function Virtualization aims to migrate packet-processing tasks from special-purpose appliances to Virtual Network Functions (VNFs) running on x86 or ARM servers. However, achieving line-rate packet processing for VNFs running on CPUs can be a challenging task. External hardware accelerators can be used to offload heavy-lifting tasks (e.g., en/decryption and hashing) from performance-critical VNFs. State-of-the-art VNF placement algorithms consider only compute resources when assigning VNFs to server nodes. We propose a placement algorithm that takes hardware-accelerator resources into consideration in addition to compute resources. For evaluation, we compare the performance of our approach with the Integer Linear Program (ILP) method and with the state-of-the-art best-fit method of VNF placement.
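    The best-fit baseline mentioned above can be sketched as follows, extended here to account for accelerator units; the combined slack metric over CPU and accelerator capacity is an illustrative assumption, not the paper's formulation.

```python
# Minimal best-fit sketch: choose the feasible node whose remaining
# capacity after placement is smallest (tightest fit), considering both
# CPU and accelerator units. Names and capacities are illustrative.

def best_fit(vnf, nodes):
    feasible = [
        (cap["cpu"] - vnf["cpu"] + cap["acc"] - vnf["acc"], node_id)
        for node_id, cap in nodes.items()
        if cap["cpu"] >= vnf["cpu"] and cap["acc"] >= vnf["acc"]
    ]
    if not feasible:
        return None
    slack, node_id = min(feasible)  # tightest feasible node
    nodes[node_id]["cpu"] -= vnf["cpu"]
    nodes[node_id]["acc"] -= vnf["acc"]
    return node_id

nodes = {"a": {"cpu": 16, "acc": 2}, "b": {"cpu": 6, "acc": 1}}
print(best_fit({"cpu": 4, "acc": 1}, nodes))  # picks the tighter node "b"
```

    Best-fit packs VNFs tightly to preserve large free blocks for later requests; an accelerator-aware variant must weigh scarce FPGA units in that tightness measure as well.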

    Improving resource utilization with virtual media function decomposition

    Get PDF
    For many years, media transport and processing in professional media environments have been accomplished using specialized hardware. The flexibility of IP networking and Media Function Virtualization (MFV) can enable broadcasters to build cost-efficient architectures for the deployment of media services. These architectures can be further exploited by decomposing Virtual Media Functions (VMFs), resulting in improved utilization of MFV Infrastructure (MFVi) resources. To this end, this paper presents an algorithm aimed at optimizing the VMF Forwarding Graphs (VMF-FGs) of media services. A first-fit-based heuristic is also proposed for deploying media services on a given MFVi. The evaluation results indicate that VMF-FG decomposition can significantly improve the request acceptance ratio and reduce resource requirements.
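    A toy illustration (with assumed numbers) of why decomposition can raise the acceptance ratio: a monolithic VMF may not fit on any single node, while its decomposed parts fit the fragmented leftover capacity.

```python
# Toy first-fit admission check. The node capacities and VMF demands are
# assumed numbers chosen to show the effect of decomposition, not data
# from the paper's evaluation.

def first_fit(demands, free):
    free = free.copy()
    for d in demands:
        for i, f in enumerate(free):
            if f >= d:
                free[i] -= d
                break
        else:
            return False  # no node fits this part: request rejected
    return True

free_cpu = [5, 5]                    # leftover CPU on two MFVi nodes
print(first_fit([8], free_cpu))      # monolithic VMF: rejected
print(first_fit([4, 4], free_cpu))   # decomposed VMFs: accepted
```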

    ExPECA: An Experimental Platform for Trustworthy Edge Computing Applications

    Full text link
    This paper presents ExPECA, an edge computing and wireless communication research testbed designed to tackle two pressing challenges: comprehensive end-to-end experimentation and high levels of experimental reproducibility. Leveraging the OpenStack-based Chameleon Infrastructure (CHI) framework for its proven flexibility and ease of operation, ExPECA is located in a unique, isolated underground facility, providing a highly controlled setting for wireless experiments. The testbed is engineered to facilitate integrated studies of both communication and computation, offering a diverse array of Software-Defined Radios (SDRs) and Commercial Off-The-Shelf (COTS) wireless and wired links, as well as containerized computational environments. We exemplify the experimental possibilities of the testbed using OpenRTiST, a latency-sensitive, bandwidth-intensive application, and analyze its performance. Lastly, we highlight an array of research domains and experimental setups that stand to gain from ExPECA's features, including closed-loop applications and time-sensitive networking.

    Optimization algorithms for the deployment of future network and media services

    No full text
    In recent years, new telecom and media architectures based on inexpensive Commercial Off-The-Shelf (COTS) hardware have been introduced. To truly exploit the advantages of these architectures, efficient resource allocation is essential. This dissertation discusses algorithms aimed at optimizing resource allocation in future telecom and media networks. Furthermore, future networks are expected to support applications with widely varying network requirements: the same underlying network is expected to carry both standard traffic, which requires no promises from the network, and time-sensitive traffic, which expects timing guarantees. This calls for algorithms that schedule traffic in the network so that performance guarantees can be given for time-sensitive traffic. This dissertation therefore also covers optimization algorithms for the scheduling of time-sensitive network traffic.